82 research outputs found
Covering With Tensor Products and Powers
We study when a tensor product of irreducible representations of the
symmetric group contains all irreducibles as subrepresentations; we say
such a tensor product covers. Our results show that this behavior
is typical. We first give a general criterion for such a tensor product to have
this property. Using this criterion we show that the tensor product of a
constant number of random irreducibles covers asymptotically
almost surely. We also consider, for a fixed irreducible representation, the
degree of tensor power needed to cover. We show that the simple
lower bound based on dimension is tight up to a universal constant factor for
every irreducible representation, as was recently conjectured by Liebeck,
Shalev, and Tiep.
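The covering property can be checked directly in small cases from the character table: the character of a tensor product is the pointwise product of characters, and the multiplicity of an irreducible is an inner product of characters. A minimal sketch for S_3 (the setup below is illustrative and not taken from the paper):

```python
from fractions import Fraction

# Conjugacy classes of S_3: identity, transpositions, 3-cycles
class_sizes = [1, 3, 2]
group_order = 6

# Character table rows: trivial, sign, and the 2-dimensional standard rep
chars = {
    "trivial":  [1,  1,  1],
    "sign":     [1, -1,  1],
    "standard": [2,  0, -1],
}

def multiplicity(chi, psi):
    """Multiplicity of irreducible psi inside chi via the character inner product."""
    return sum(Fraction(s * a * b, group_order)
               for s, a, b in zip(class_sizes, chi, psi))

# Character of standard ⊗ standard is the pointwise product of characters
tensor_char = [a * a for a in chars["standard"]]

mults = {name: multiplicity(tensor_char, psi) for name, psi in chars.items()}
# Each multiplicity comes out to 1, so the tensor square already covers
covers = all(m > 0 for m in mults.values())
print(mults, covers)
```

Here the tensor square of the standard representation decomposes as trivial ⊕ sign ⊕ standard, so it covers; the abstract's results concern the analogous behavior for large symmetric groups.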
Free Energy Subadditivity for Symmetric Random Hamiltonians
We consider a random Hamiltonian defined on a compact
space that admits a transitive action by a compact group.
When the law of the Hamiltonian is invariant under this action, we show that
its expected free energy relative to the unique invariant probability measure
on the space obeys a subadditivity property in the law of the Hamiltonian
itself. The bound is often
tight for weak disorder and relates free energies at different temperatures
when is a Gaussian process. Many examples are discussed including branching
random walk, several spin glasses, random constraint satisfaction problems, and
the random field Ising model. We also provide a generalization to quantum
Hamiltonians with applications to the quantum SK and SYK models.
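Schematically (our notation, not the paper's), with μ the invariant probability measure on the space X, one natural reading of subadditivity in the law of the Hamiltonian is: whenever H decomposes in law as a sum of two independent invariant pieces, the expected free energy of the sum is at most the sum of the expected free energies,

```latex
F(H) \;=\; \log \int_X e^{H(x)}\,\mu(dx), \qquad
H \stackrel{d}{=} H_1 + H_2
\;\Longrightarrow\;
\mathbb{E}\,F(H) \;\le\; \mathbb{E}\,F(H_1) + \mathbb{E}\,F(H_2),
```

with H_1, H_2 independent and invariant in law; normalizations and temperature dependence are suppressed here.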
Incentivizing Exploration with Linear Contexts and Combinatorial Actions
We advance the study of incentivized bandit exploration, in which arm choices
are viewed as recommendations and are required to be Bayesian incentive
compatible. Recent work has shown under certain independence assumptions that
after collecting enough initial samples, the popular Thompson sampling
algorithm becomes incentive compatible. We give an analog of this result for
linear bandits, where the independence of the prior is replaced by a natural
convexity condition. This opens up the possibility of efficient and
regret-optimal incentivized exploration in high-dimensional action spaces. In
the semibandit model, we also improve the sample complexity for the
pre-Thompson sampling phase of initial data collection.Comment: International Conference on Machine Learning (ICML) 202
Approximate Ground States of Hypercube Spin Glasses are Near Corners
We show that with probability exponentially close to 1, all near-maximizers
of any mean-field mixed p-spin glass Hamiltonian on the hypercube
are near a corner. This confirms a recent conjecture of Gamarnik and Jagannath.
The proof is elementary and generalizes to arbitrary polytopes with
subexponentially many faces.
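A toy numerical illustration (not the paper's argument) of why maximizers over the cube sit at corners: for a quadratic Hamiltonian with zero diagonal, the energy is linear in each coordinate, so coordinate ascent pushes every coordinate to the boundary.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
G = rng.normal(size=(n, n))
G = (G + G.T) / 2
np.fill_diagonal(G, 0.0)   # zero diagonal: x^T G x is linear in each x_i

x = rng.uniform(-1, 1, size=n)   # start in the interior of [-1, 1]^n
for _ in range(200):             # coordinate ascent sweeps
    for i in range(n):
        g = float(G[i] @ x)      # gradient in coordinate i (up to a factor 2)
        x[i] = 1.0 if g >= 0 else -1.0   # maximize the linear piece over [-1, 1]

# After the ascent, every coordinate sits at +/-1, i.e. x is a corner
print(np.abs(x))
```

The theorem is much stronger: it concerns all approximate maximizers of general mixed models, not the output of one greedy dynamics.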
On Size-Independent Sample Complexity of ReLU Networks
We study the sample complexity of learning ReLU neural networks from the
point of view of generalization. Given norm constraints on the weight matrices,
a common approach is to estimate the Rademacher complexity of the associated
function class. Previously, Golowich, Rakhlin, and Shamir (2020) obtained a bound
independent of the network size (scaling with a product of Frobenius norms)
except for a factor of the square-root depth. We give a refinement which often
has no explicit depth-dependence at all.
Comment: 4 pages
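To make the scaling concrete, here is a toy computation of the norm-based quantity in such bounds, the product of layer Frobenius norms, contrasted with the same quantity carrying an extra square-root-of-depth factor; the sizes and initialization are arbitrary, and the actual theorems involve further constants not modeled here:

```python
import numpy as np

rng = np.random.default_rng(1)
depth, width = 8, 64
# Random weight matrices with roughly unit-scale Frobenius behavior
weights = [rng.normal(size=(width, width)) / np.sqrt(width) for _ in range(depth)]

frob_product = float(np.prod([np.linalg.norm(W, "fro") for W in weights]))
with_depth_factor = np.sqrt(depth) * frob_product  # the sqrt(depth) overhead

ratio = with_depth_factor / frob_product  # exactly sqrt(depth), here sqrt(8)
print(frob_product, ratio)
```

For a depth-8 network the two bounds differ by a factor sqrt(8) ≈ 2.83; removing the explicit depth factor matters most for very deep networks.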
The Price of Incentivizing Exploration: A Characterization via Thompson Sampling and Sample Complexity
We consider incentivized exploration: a version of multi-armed bandits where
the choice of arms is controlled by self-interested agents, and the algorithm
can only issue recommendations. The algorithm controls the flow of information,
and the information asymmetry can incentivize the agents to explore. Prior work
achieves optimal regret rates up to multiplicative factors that become
arbitrarily large depending on the Bayesian priors, and scale exponentially in
the number of arms. A more basic problem of sampling each arm once runs into
similar factors.
We focus on the price of incentives: the loss in performance, broadly
construed, incurred for the sake of incentive-compatibility. We prove that
Thompson Sampling, a standard bandit algorithm, is incentive-compatible if
initialized with sufficiently many data points. The performance loss due to
incentives is therefore limited to the initial rounds when these data points
are collected. The problem is largely reduced to that of sample complexity: how
many rounds are needed? We address this question, providing matching upper and
lower bounds and instantiating them in various corollaries. Typically, the
optimal sample complexity is polynomial in the number of arms and exponential
in the "strength of beliefs".
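The warm-start idea can be sketched in the classic Beta-Bernoulli setting (the arm means, sample counts, and uniform priors below are invented for illustration): first collect some samples of every arm, then run Thompson sampling from the resulting posterior.

```python
import numpy as np

rng = np.random.default_rng(3)
means = np.array([0.3, 0.5, 0.6, 0.7])   # hypothetical Bernoulli arm means
K, T, n0 = len(means), 1000, 25          # arms, TS rounds, warm-start samples per arm

# Warm start: n0 pulls of each arm before Thompson sampling begins
successes = rng.binomial(n0, means).astype(float)
failures = n0 - successes

for _ in range(T):
    theta = rng.beta(successes + 1, failures + 1)  # one posterior sample per arm
    k = int(np.argmax(theta))                      # play the sampled best arm
    r = rng.binomial(1, means[k])
    successes[k] += r
    failures[k] += 1 - r

pulls = successes + failures - n0   # pulls made during the TS phase
```

In the abstract's framing, the performance loss from incentives is confined to the K * n0 warm-start rounds, and the question becomes how large n0 must be for the subsequent Thompson sampling phase to be incentive compatible.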
Strong Topological Trivialization of Multi-Species Spherical Spin Glasses
We study the landscapes of multi-species spherical spin glasses. Our results
determine the phase boundary for annealed trivialization of the number of
critical points, and establish its equivalence with a quenched \emph{strong
topological trivialization} property. Namely in the "trivial" regime, the
number of critical points is constant, all are well-conditioned, and all
approximate critical points are close to a true critical point. As a
consequence, we deduce that Langevin dynamics at sufficiently low temperature
has logarithmic mixing time.
Our approach begins with the Kac--Rice formula. We derive closed form
expressions for some asymptotic determinants studied in (Ben
Arous-Bourgade-McKenna 2023, McKenna 2021), and characterize the annealed
trivialization phase by explicitly solving a suitable multi-dimensional
variational problem. To obtain more precise quenched results, we develop
general purpose techniques to avoid sub-exponential correction factors and show
non-existence of \emph{approximate} critical points. Many of the results are
new even in the 1-species case.
Comment: 57 pages, 4 figures. Updated references
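For orientation, the Kac--Rice formula invoked above takes the schematic form (our notation, suppressing the model-specific setup):

```latex
\mathbb{E}\,\#\{\sigma \in M : \nabla H(\sigma) = 0\}
\;=\; \int_{M}
\mathbb{E}\!\left[\,\bigl|\det \nabla^2 H(\sigma)\bigr|
\;\middle|\; \nabla H(\sigma) = 0 \right]
\varphi_{\nabla H(\sigma)}(0)\, d\sigma,
```

where φ denotes the density of the gradient at zero; the annealed computation evaluates this expectation, while the quenched strong trivialization results require the sharper control of correction factors described in the abstract.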
- …